    Real-time gaze estimation using a Kinect and a HD webcam

    In human-computer interaction, gaze orientation is an important and promising source of information about a user's attention and focus. Gaze detection can also be an extremely useful metric for analysing human mood and affect, and gaze can serve as an input method for human-computer interaction. However, real-time, accurate gaze estimation remains an open problem. In this paper, we propose a simple and novel model for estimating, in real time, the gaze direction of a user on a computer screen. The method uses cheap capturing devices: an HD webcam and a Microsoft Kinect. We model the gaze motion of a forward-facing user as the sum of the local gaze motion produced by eye movement and the global gaze motion driven by face movement. We validate the proposed gaze estimation model and provide an experimental evaluation of the method's reliability and precision.
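    The additive decomposition the abstract describes (on-screen gaze as eye-driven local motion plus head-driven global motion) can be sketched as follows. All function names, parameters, and the flat-screen geometry are illustrative assumptions, not the paper's actual formulation:

    ```python
    import math

    def gaze_point_on_screen(head_yaw, head_pitch, eye_yaw, eye_pitch, distance_mm):
        """Combine the head-driven (global) and eye-driven (local) rotations
        into one gaze direction, then intersect it with a screen plane
        `distance_mm` in front of the user. Angles are in radians; the
        additive model and all names here are hypothetical."""
        total_yaw = head_yaw + eye_yaw        # global + local horizontal motion
        total_pitch = head_pitch + eye_pitch  # global + local vertical motion
        x = distance_mm * math.tan(total_yaw)
        y = distance_mm * math.tan(total_pitch)
        return x, y
    ```

    Under this sketch, a user looking straight ahead (all angles zero) gazes at the screen centre, and small eye or head rotations shift the gaze point proportionally to viewing distance.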

    A low-cost head and eye tracking system for realistic eye movements in virtual avatars

    A virtual avatar or autonomous agent is a digital representation of a human being that can be controlled either by a human or by an artificially intelligent computer system. Increasingly, avatars are becoming realistic virtual human characters that exhibit human behavioural traits, body language, and eye and head movements. Since the interpretation of eye and head movements is an important part of nonverbal human communication, it is essential to reproduce these movements accurately in virtual avatars to avoid falling into the well-known "uncanny valley". In this paper we present a cheap hybrid real-time head and eye tracking system based on existing open-source software and commonly available hardware. Our evaluation indicates that the head and eye tracking system is stable and accurate and allows a human user to robustly puppet a virtual avatar, potentially allowing us to train an AI system to learn realistic human head and eye movements.

    Dynamics of earthquake nucleation process represented by the Burridge-Knopoff model

    Full text link
    The dynamics of the earthquake nucleation process is studied on the basis of the one-dimensional Burridge-Knopoff (BK) model obeying the rate- and state-dependent friction (RSF) law. We investigate the properties of the model at each stage of the nucleation process: the quasi-static initial phase, the unstable acceleration phase, and the high-speed rupture phase, or mainshock. Two nucleation lengths, L_sc and L_c, are identified and investigated. The nucleation length L_sc and the initial phase exist only in the weak frictional instability regime, while the nucleation length L_c and the acceleration phase exist in both the weak and strong instability regimes. Both L_sc and L_c are found to be determined by the model parameters, namely the frictional weakening parameter and the elastic stiffness parameter, and are hardly dependent on the size of the ensuing mainshock. The sliding velocity is extremely slow in the initial phase up to L_sc, on the order of the pulling speed of the plate, while it reaches a detectable level at a certain stage of the acceleration phase. The continuum limits of the results are discussed. The continuum limit of the BK model lies in the weak frictional instability regime, so a mature homogeneous fault under the RSF law is always accompanied by the quasi-static nucleation process. The duration of each stage of the nucleation process is examined. The relation to the elastic continuum model and the implications for real seismicity are discussed. To appear in the European Physical Journal.

    Visual search in the (un)real world: How head-mounted displays affect eye movements, head movements and target detection

    No full text
    Kollenberg T, Neumann A, Schneider D, et al. Visual search in the (un)real world: How head-mounted displays affect eye movements, head movements and target detection. In: Morimoto CH, ed. Proceedings of the Eye Tracking Research & Applications Symposium. New York, NY, USA: ACM; 2010: 121-124.
    Head-mounted displays (HMDs) that use a see-through display method allow computer-generated images to be superimposed on a real-world view. Such devices, however, normally restrict the user's field of view. Furthermore, low display resolution and display curvature are suspected to impair both foveal and peripheral vision and may thus affect visual processing. To evaluate this assumption, we compared performance and eye-movement patterns in a visual search paradigm under different viewing conditions: participants either wore an HMD, had their field of view restricted by blinders, or had an unrestricted field of view (normal viewing). From the head and eye-movement recordings we calculated the contribution of eye rotation to lateral shifts of attention. Results show that wearing an HMD leads to less eye rotation and requires more head movement than either the blinders condition or normal viewing.
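    The per-shift contribution of eye rotation that the study computes from paired head and eye recordings could be estimated along these lines. This is a hypothetical reconstruction of the metric, not the authors' actual computation; the function name and the amplitude-ratio definition are assumptions:

    ```python
    def eye_contribution(eye_amplitudes, head_amplitudes):
        """For each lateral gaze shift, return the fraction of the shift
        carried by eye rotation rather than head rotation. Amplitudes are
        paired per shift (e.g. in degrees). The ratio definition here is
        an assumption, not taken from the paper."""
        fractions = []
        for eye, head in zip(eye_amplitudes, head_amplitudes):
            total = eye + head
            # A shift with zero total amplitude contributes nothing.
            fractions.append(eye / total if total else 0.0)
        return fractions
    ```

    Averaging these fractions per condition would reproduce the kind of comparison the study reports: a lower mean fraction in the HMD condition corresponds to less eye rotation and more head movement.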